AI Regulation
Japan proposed a framework for the global regulation of generative AI, signed by 49 countries as of May 2024.
October 2023: The Biden Administration issued the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”
According to the New York Times, the initiative was led by “AI Czar” Kamala Harris.
Politico (March 2024) has a lengthy piece, Inside the shadowy global battle to tame the world’s most dangerous technology. When noted anti-tech activist Tristan Harris claimed that Meta’s AI was showing how to build bombs, Zuckerberg pulled out his phone and found the same instructions via a Google search.
> “A licensing-based approach is the right way forward,” Natasha Crampton, Microsoft’s chief responsible AI officer, told POLITICO. “It allows a close interaction between the developer of the tech and the regulator to really assess the risks.”

and concludes:

> If 2023 was the year AI lobbying burst into the political mainstream, 2024 will decide who wins.
>
> Key decision dates are fast approaching. By the summer, parts of Europe’s AI Act will come into force. The White House’s rival AI executive order is already changing how federal agencies handle the technology. More reforms in Washington are expected by the year’s end.
William Rinehart, an analyst at the American Enterprise Institute, offers his reasons why AI regulation is a bad idea.
The Gladstone Report, commissioned by the White House, says “Current frontier AI development poses urgent and growing risks to national security.” The authors talked to more than 200 employees at top AI firms. It was written by a company that does technical education on AI for the government and finds (surprise!) that the government should spend way more on educating government officials.
Zvi goes through his standard long and meticulous review of the Gladstone Report and concludes that it misses the counter-arguments, especially the known problems of regulatory capture and mismanagement. But generally he takes it as a given that “AI Alignment” is a serious problem.
President Barack Obama spoke with The Verge in a wide-ranging conversation:

> The conversation also tackled First Amendment challenges and the necessity of a multi-faceted, adaptive regulatory approach for AI.
He disagrees with the idea that social networks are something called “common carriers” that have to distribute all information equally.
One of Obama’s worries is that the government needs insight and expertise to properly regulate AI, and you’ll hear him make a pitch for why people with that expertise should take a tour of duty in the government to make sure we get these things right.
Elad Gil writes that there have been multiple calls to regulate AI, and that it is too early to do so.
Calls for regulation are:

- self-serving for the parties asking for it (it is not surprising the main incumbents say regulation is good for AI, as it will lock in their incumbency). Some notable counterexamples also exist where we should likely regulate things related to AI, but these are few and far between (e.g. the export of advanced chip technology to foreign adversaries).
Vox (Aug 2023), The AI rules that US policymakers are considering, explained, covers the Algorithmic Accountability Act (the FTC regulating claims made by GPTs), mandatory “safety audits”, licensing requirements, and more.
De Jure Regulation
Politico (March 2023) reports that Europe’s original plan to bring AI under control is no match for the technology’s new, shiny chatbot application. Lawmakers are:

> working to impose stricter requirements on both developers and users of ChatGPT and similar AI models, including managing the risk of the technology and being transparent about its workings. They are also trying to slap tougher restrictions on large service providers while keeping a lighter-touch regime for everyday users playing around with the technology.
De Facto Regulation
Several big companies are reminding their employees not to enter confidential information into ChatGPT:
> JPMorgan joins Amazon, Verizon and Accenture in banning staff from using the chatbot.
Also see
California Regulation
California’s AI Bill SB 1047
And a lengthy response by notable AI scientist Andrew Ng on why it’s a bad idea (Summary): the bill fails to distinguish between a technology and its application. Any technology, say an electric motor, can lead to both good and bad outcomes; it’s how it’s applied that makes the difference. You can use an electric motor to build a guided bomb, but rather than regulate all electric motors, laws should regulate bombs instead.
Washington State Proposed Regulation
#washington
> Regulating artificial intelligence because it can be misused is similar to arguing that politicians should regulate the internet or the scientific process because they could be misused.
>
> Rather than looking to limit AI’s innovation, legislators should focus on privacy and other concerns in the same way they would with other information technologies and tools.
via Establishing an AI task force is a bad idea:

> Whenever a new technology emerges, it can inspire a mix of excitement and fear. The bright new possibilities may encourage some, while others are prone to focus only on potential abuses. If you’re of a more pessimistic frame of mind, the idea of a government-created task force to oversee development of artificial intelligence may strike you as a good idea. In reality, creating such a task force is more likely than not to hinder the innovative potential of AI to do good without mitigating many harms.
Europe
Here’s the EU Official Compliance Checker
The Guide for Developers explains the specifics of what AI developers need to know.
The regulation is primarily based on how risky your use case is rather than on what technology you use. The act splits AI applications into four risk categories: prohibited, high, limited, and minimal (a rough code sketch of this triage follows the list below):
- Prohibited AI Applications include systems that manipulate behaviour through subliminal techniques, exploit vulnerabilities, or engage in social scoring and real-time biometric identification in public spaces for law enforcement without strict justifications. Only governments can develop a prohibited application.
- High-Risk AI Systems: These are involved in critical sectors like healthcare, education, and employment, where there’s a significant impact on people’s safety or fundamental rights. This is the category of application most affected by the regulation.
- Limited Risk AI Systems: These are AI applications like chatbots or content generation. The main requirement here for compliance is transparency. The end-user should know that they’re interacting with AI.
- Minimal Risk AI Applications: This category covers the majority of AI applications such as AI in games, spam filters and recommendation engines.
Only the High-Risk applications have specific, onerous requirements.
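To make the tiering concrete, here is a minimal sketch in Python of how a developer might triage an application into one of the four tiers. Everything in it is assumed for illustration: the `classify` function, the category sets, and the use-case labels are invented for this note, not taken from the Act or the Compliance Checker, and the real legal classification turns on the Act’s detailed annexes.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # banned outright (e.g., social scoring)
    HIGH = "high"              # onerous compliance: audits, documentation
    LIMITED = "limited"        # transparency obligations only
    MINIMAL = "minimal"        # no specific obligations

# Hypothetical, illustrative mappings; the Act's annexes are far more detailed.
PROHIBITED_USES = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK_SECTORS = {"healthcare", "education", "employment"}
TRANSPARENCY_USES = {"chatbot", "content_generation"}

def classify(use_case: str, sector: str) -> RiskTier:
    """Rough triage of an AI application into an AI Act risk tier."""
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if sector in HIGH_RISK_SECTORS:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL  # spam filters, games, recommenders, etc.

print(classify("chatbot", "retail"))              # RiskTier.LIMITED
print(classify("triage_assistant", "healthcare")) # RiskTier.HIGH
```

Note that the expensive compliance work attaches only to the `HIGH` branch, which matches the point above: only High-Risk applications face the onerous requirements.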